Reviews: The committee machine: Computational to statistical gaps in learning a two-layers neural network

Neural Information Processing Systems

The committee machine is a simple and natural model of a two-layer neural network. (The results of the paper also apply to many related models: an arbitrary function mapping the K hidden values to the final binary output is allowed.) This paper studies the problem of learning the weights W under a natural random model. We are given m random examples (X, Y), where the input X (in R^n) has iid Gaussian entries and Y (in {+1, -1}) is the associated output of the network. The unknown weights W are iid from a known prior.
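Concretely, the generative model described above can be sketched as a small simulation (a minimal illustration; the dimensions, the sign activation, and the majority-vote output rule are assumptions chosen for the classic committee machine, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, m = 100, 3, 500                 # input dimension, hidden units, examples

W = rng.standard_normal((K, n))       # unknown weights, iid from a known prior
X = rng.standard_normal((m, n))       # iid Gaussian inputs, one row per example
hidden = np.sign(X @ W.T)             # the K hidden values for each example
Y = np.sign(hidden.sum(axis=1))       # committee vote: majority of hidden signs
```

The learning problem is then to recover W (up to symmetries) from the m pairs (X, Y); choosing K odd avoids ties in the majority vote.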


Learning-Enhanced Neighborhood Selection for the Vehicle Routing Problem with Time Windows

Feijen, Willem, Schäfer, Guido, Dekker, Koen, Pieterse, Seppo

arXiv.org Artificial Intelligence

Large Neighborhood Search (LNS) is a broadly applicable approach that has proven highly efficient in practice for solving optimization problems. We propose to integrate machine learning (ML) into LNS to help decide which parts of the solution should be destroyed and repaired in each iteration. We refer to our new approach as Learning-Enhanced Neighborhood Selection (LENS for short). Our approach is universally applicable: it can be combined with any LNS algorithm to improve the workings of its destroy step. In this paper, we demonstrate the potential of LENS on the fundamental Vehicle Routing Problem with Time Windows (VRPTW). We implemented an LNS algorithm for VRPTW and collected training data on novel instances derived from well-known, extensively used benchmark datasets. We trained our LENS approach on this data and compared its experimental results against two benchmark algorithms: a random neighborhood selection method, to show that LENS learns to make informed choices, and an oracle neighborhood selection method, to demonstrate the potential of our approach. With LENS, we obtain significantly improved solution quality.


Random Models for Fuzzy Clustering Similarity Measures

DeWolfe, Ryan, Andrews, Jeffery L.

arXiv.org Machine Learning

The Adjusted Rand Index (ARI) is a widely used method for comparing hard clusterings, but it requires a choice of random model that is often left implicit. Several recent works have extended the Rand Index to fuzzy clusterings, but the assumptions of the most common random model are difficult to justify in fuzzy settings. We propose a single framework for computing the ARI with three random models that are intuitive and explainable for both hard and fuzzy clusterings, along with the benefit of lower computational complexity. The theory and assumptions of the proposed models are contrasted with the existing permutation model. Computations on synthetic and benchmark data show that each model has distinct behaviour, meaning that accurate model selection is important for the reliability of results.
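For reference, the ARI under the usual permutation random model (the baseline the paper contrasts its proposals with) can be computed from the pair-count contingency table. This sketch handles hard clusterings only and does not implement the paper's fuzzy models; degenerate cases (e.g. a single cluster) are not handled:

```python
import numpy as np

def adjusted_rand_index(a, b):
    """ARI of two hard clusterings under the permutation random model."""
    a, b = np.asarray(a), np.asarray(b)
    ca = np.unique(a, return_inverse=True)[1]
    cb = np.unique(b, return_inverse=True)[1]
    # contingency table: n_ij = number of points in cluster i of a and j of b
    n_ij = np.zeros((ca.max() + 1, cb.max() + 1))
    for i, j in zip(ca, cb):
        n_ij[i, j] += 1
    comb2 = lambda x: x * (x - 1) / 2          # pairs within a group
    sum_ij = comb2(n_ij).sum()                 # agreeing pairs
    sum_a = comb2(n_ij.sum(axis=1)).sum()
    sum_b = comb2(n_ij.sum(axis=0)).sum()
    expected = sum_a * sum_b / comb2(len(a))   # chance agreement
    return (sum_ij - expected) / (0.5 * (sum_a + sum_b) - expected)
```

Identical partitions (up to label renaming) score 1, while the "expected" term is exactly where the choice of random model enters.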


Optimizing Multi-Domain Performance with Active Learning-based Improvement Strategies

Mahalingam, Anand Gokul, Shah, Aayush, Gulati, Akshay, Mascarenhas, Royston, Panduranga, Rakshitha

arXiv.org Artificial Intelligence

Improving performance in multiple domains is a challenging task, and often requires significant amounts of data to train and test models. Active learning techniques provide a promising solution by enabling models to select the most informative samples for labeling, thus reducing the amount of labeled data required to achieve high performance. In this paper, we present an active learning-based framework for improving performance across multiple domains. Our approach consists of two stages: first, we use an initial set of labeled data to train a base model, and then we iteratively select the most informative samples for labeling to refine the model. We evaluate our approach on several multi-domain datasets, including image classification, sentiment analysis, and object recognition. Our experiments demonstrate that our approach consistently outperforms baseline methods and achieves state-of-the-art performance on several datasets. We also show that our method is highly efficient, requiring significantly fewer labeled samples than other active learning-based methods. Overall, our approach provides a practical and effective solution for improving performance across multiple domains using active learning techniques.
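The two-stage approach described above can be sketched with a generic uncertainty-sampling selector (a minimal illustration for the binary case; the training helpers in the commented loop are hypothetical names, not from the paper):

```python
import numpy as np

def uncertainty_sampling(predict_proba, X_pool, k):
    """Select the k pool points whose predicted probability of the positive
    class is closest to 0.5, i.e. where the model is least certain."""
    proba = predict_proba(X_pool)      # shape (n_pool,), P(y=1 | x)
    margin = np.abs(proba - 0.5)
    return np.argsort(margin)[:k]

# Active-learning loop sketch (hypothetical helpers):
# model = train(labeled)                                  # stage 1: base model
# for _ in range(rounds):                                 # stage 2: refinement
#     idx = uncertainty_sampling(model.predict_proba, X_pool, k=10)
#     labeled += oracle_label(X_pool[idx])
#     model = train(labeled)
```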


MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices

Li, Kejie, Bian, Jia-Wang, Castle, Robert, Torr, Philip H. S., Prisacariu, Victor Adrian

arXiv.org Artificial Intelligence

High-quality 3D ground-truth shapes are critical for 3D object reconstruction evaluation. However, it is difficult to create a replica of an object in reality, and even 3D reconstructions generated by 3D scanners have artefacts that cause biases in evaluation. To address this issue, we introduce a novel multi-view RGBD dataset captured using a mobile device, which includes highly precise 3D ground-truth annotations for 153 object models featuring a diverse set of 3D structures. We obtain precise 3D ground-truth shape without relying on high-end 3D scanners by utilising LEGO models with known geometry as the 3D structures for image capture. The distinct data modality offered by high-resolution RGB images and low-resolution depth maps captured on a mobile device, when combined with precise 3D geometry annotations, presents a unique opportunity for future research on high-fidelity 3D reconstruction. Furthermore, we evaluate a range of 3D reconstruction algorithms on the proposed dataset. Project page: http://code.active.vision/MobileBrick/


Identifiability of Sparse Causal Effects using Instrumental Variables

Pfister, Niklas, Peters, Jonas

arXiv.org Machine Learning

Exogenous heterogeneity, for example in the form of instrumental variables, can help us learn a system's underlying causal structure and predict the outcome of unseen intervention experiments. In this paper, we consider linear models in which the causal effect from covariates $X$ on a response $Y$ is sparse. We provide conditions under which the causal coefficient becomes identifiable from the observed distribution. These conditions can be satisfied even if the number of instruments is as small as the number of causal parents. We also develop graphical criteria under which identifiability holds with probability one if the edge coefficients are sampled randomly from a distribution that is absolutely continuous with respect to Lebesgue measure and $Y$ is childless. As an estimator, we propose spaceIV, prove that it consistently estimates the causal effect if the model is identifiable, and evaluate its performance on simulated data. If identifiability does not hold, we show that it may still be possible to recover a subset of the causal parents.
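One standard way to formalize the linear IV setting sketched above (notation ours; it may differ from the paper's) is via the moment condition implied by exogenous instruments $I$:

```latex
Y = X^\top \beta^* + \eta, \qquad \mathbb{E}[\eta \mid I] = 0
\;\;\Longrightarrow\;\; \mathbb{E}\!\left[ I \, (Y - X^\top \beta^*) \right] = 0 .
```

Sparsity means $\beta^*$ has few nonzero entries (the causal parents of $Y$), and identifiability asks when $\beta^*$ is the unique sparse solution of these moment equations — which is why fewer instruments can suffice than in the classical dense setting.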


Introduction to Logistic Regression: Predicting Diabetes

#artificialintelligence

Data can be broadly divided into continuous data, which can take an infinite number of values within a given range, such as distance or time, and categorical/discrete data, which contain a finite number of values or categories, such as payment methods or customer complaints. We have already seen examples of applying regression to continuous prediction problems in the form of linear regression, where we predicted sales, but to predict categorical outputs we can use logistic regression. While we are still using regression to predict outcomes, the main aim of logistic regression is to predict which category an observation belongs to rather than an exact value. Examples of questions this method can be used for include: "How likely is a person to suffer from a disease (outcome) given their age, sex, smoking status, etc. (variables/features)?" "How likely is this email to be spam?" "Will a student pass a test given some predictors of performance?"
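As a minimal illustration of the idea (the coefficients below are made up for the example, not fitted to real data), the "will a student pass?" question maps a predictor such as study hours to a probability via the logistic (sigmoid) function:

```python
import math

def sigmoid(z):
    """Squash any real number into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# made-up coefficients: intercept and effect per hour studied
b0, b1 = -4.0, 1.0

for hours in (2, 4, 6):
    p = sigmoid(b0 + b1 * hours)       # P(pass | hours studied)
    print(f"{hours} hours -> P(pass) = {p:.2f}")
```

The classifier then predicts "pass" whenever the probability crosses a chosen threshold, typically 0.5.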


Label noise detection under the Noise at Random model with ensemble filters

Moura, Kecia G., Prudêncio, Ricardo B. C., Cavalcanti, George D. C.

arXiv.org Artificial Intelligence

Label noise detection has been widely studied in Machine Learning because of its importance in improving training data quality. Satisfactory noise detection has been achieved by adopting ensembles of classifiers. In this approach, an instance is flagged as mislabeled if a high proportion of members in the pool misclassifies it. Previous authors have empirically evaluated this approach; nevertheless, they mostly assumed that label noise is generated completely at random in a dataset. This is a strong assumption, since other types of label noise are feasible in practice and can influence noise detection results. This work investigates the performance of ensemble noise detection under two different noise models: the Noisy at Random (NAR) model, in which the probability of label noise depends on the instance class, in comparison to the Noisy Completely at Random (NCAR) model, in which the probability of label noise is independent of the instance class. In this setting, we investigate the effect of class distribution on noise detection performance, since it changes the total noise level observed in a dataset under the NAR assumption. Further, an evaluation of the ensemble vote threshold is conducted to contrast with the most common approaches in the literature. In many of the performed experiments, choosing one noise generation model over another can lead to different results when considering aspects such as class imbalance and the noise level ratio among different classes.
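The two noise models can be contrasted in a small simulation (a sketch with arbitrary flip probabilities of our choosing, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=1000)       # clean binary labels

# NCAR: every label is flipped with the same probability, regardless of class
p = 0.1
y_ncar = np.where(rng.random(y.size) < p, 1 - y, y)

# NAR: the flip probability depends on the true class of the instance
p_by_class = {0: 0.05, 1: 0.20}
flip = rng.random(y.size) < np.vectorize(p_by_class.get)(y)
y_nar = np.where(flip, 1 - y, y)
```

Under NAR, the total observed noise level depends on the class distribution (here, more class-1 instances would mean more total noise), which is exactly the interaction the paper studies.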


Personalized Cancer Diagnosis Using Machine Learning

#artificialintelligence

This is a case study on the personalized cancer diagnosis problem. Before diving deep into the issue, let us understand what the challenges with cancer diagnosis are and how machine learning can help mitigate them. Note: this problem is taken from a NIPS 2017 competition, and the details can be found using this link. Let us go through the current process first. To identify whether a person has cancer, a specialist first creates a list of genetic variations that need to be analyzed. He or she then searches for all the relevant evidence, such as published journal articles.